

Aug 08, 2006

The Talented Mr. Ripley

Via Thinking Meat 

Ripley is a robot designed by Deb Roy of the Cognitive Machines Group at MIT's Media Lab.

Ripley was designed to learn about its environment by moving around and touching the objects in it. The underlying theoretical framework is the "Grounded Situation Model". In this approach, developed by Deb Roy and his colleague Nikolaos Mavridis, "the robot updates beliefs about its physical environment and body, based on a mixture of linguistic, visual and proprioceptive evidence. It can answer basic questions about the present or past and also perform actions through verbal interaction".
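Roy and Mavridis's code is not published here, but a grounded situation model can be pictured as a small belief store that fuses evidence from several modalities and keeps a history so that questions about the past can be answered. The Python sketch below is purely illustrative: the names (SituationModel, ObjectBelief, update, where_was) and the simple weighted-averaging fusion rule are assumptions for exposition, not Ripley's actual implementation.

# Illustrative sketch of a "grounded situation model" style belief store.
# All names and the fusion rule are assumptions, not the Ripley code.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ObjectBelief:
    position: Tuple[float, float, float]   # current position estimate (metres)
    confidence: float                      # 0..1, how sure the robot is
    history: List[Tuple[float, Tuple[float, float, float]]] = field(default_factory=list)

class SituationModel:
    """Maintains beliefs about objects, fused from several evidence sources."""

    def __init__(self):
        self.objects: Dict[str, ObjectBelief] = {}

    def update(self, name, observed_pos, source_reliability, t):
        """Blend a new observation into the belief, weighted by reliability."""
        if name not in self.objects:
            self.objects[name] = ObjectBelief(tuple(observed_pos), source_reliability)
        belief = self.objects[name]
        w = source_reliability
        belief.position = tuple(
            (1 - w) * old + w * new for old, new in zip(belief.position, observed_pos)
        )
        belief.confidence = max(belief.confidence, w)
        belief.history.append((t, belief.position))
        return belief

    def where_was(self, name, t):
        """Answer a question about the past: last known position at or before time t."""
        past = [p for (ts, p) in self.objects[name].history if ts <= t]
        return past[-1] if past else None

# Example: fuse a visual detection and a verbal hint about "the red cup".
model = SituationModel()
model.update("red cup", (0.4, 0.1, 0.0), source_reliability=0.9, t=1.0)   # vision
model.update("red cup", (0.5, 0.1, 0.0), source_reliability=0.5, t=2.0)   # language
print(model.where_was("red cup", 1.5))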

This NPR story reports on the project and includes an interview with Deb Roy.

From the MIT Media Lab website:

We have constructed a 7 degree-of-freedom robot, Ripley, to investigate connections between natural language semantics, perception, and action. Our goal is to enable Ripley to perform collaborative manipulation tasks mediated by natural spoken dialogue. Key issues include representation and learning of spatial language, object and temporal reference, and physical actions / verbs. Furthermore, a "Grounded Situation Model" representation has been designed for Ripley, as well as associated processes, and a cognitive architecture was implemented through numerous intercommunicating modules. 

[Image: Ripley]


Links to Ripley video clips (from the MIT Media Lab website):

 

Ripley imagines and remembers [high resolution (12M) | low resolution (440K)]

Ripley tracks faces [high resolution (6M) | low resolution (280K)]

Ripley imagines objects [high resolution (13.5M) | low resolution (826K)]

Ripley grasping objects [high resolution (201M) | low resolution (23M)]

Ripley changing perspectives [.mov, 17M]

Training HMM model to pick up [.mov, 844K]

HMM model generating pick up [.mov, 791K]

 

Aug 05, 2006

e-CIRCUS

 
[Image: e-CIRCUS]
 

The project e-Circus (Education through Characters with emotional Intelligence and Role-playing Capabilities that Understand Social interaction) aims to develop synthetic characters that interact with pupils in a virtual school, to support social and emotional learning in the real classroom. This will be achieved through virtual role-play with synthetic characters that establish credible and empathic relations with the learners.

The project consortium, funded under the EU 6th Framework Program, includes researchers in computer science, education and psychology from the UK, Portugal, Italy and Germany. Teachers and pupils will be involved in developing both the software and a framework for using it in the classroom. The e-Circus software will be tested in schools in the UK and Germany in 2007, evaluating not only how well teachers and pupils accept the application but also whether the approach, as an innovative part of the curriculum, actually helps to reduce bullying in schools.

Aug 03, 2006

Virtual bots teach each other

From New Scientist Tech 

[Image: learning bots]

 

"Robots that teach one another new words through interaction with their surroundings have been demonstrated by UK researchers. The robots, created by Angelo Cangelosi and colleagues at Plymouth University, UK, currently exist only as computer simulations. But the researchers say their novel method of communication could someday help real-life robots cooperate when faced with a new challenge. They could also help linguists understand how human languages develop, they say..."

Read the full article on New Scientist 

Watch the video 

Aug 02, 2006

The Huggable

Via Siggraph2006 Emerging Technology website

 

[Image: the Huggable teddy bear]

 

The Huggable is a robotic pet developed by MIT researchers for therapy applications in children's hospitals and nursing homes, where live pets are not always available. The robotic teddy bear has full-body sensate skin and smooth, quiet voice-coil actuators that let it relate to people through touch. Further features include "temperature, electric field, and force sensors which it uses to sense the interactions that people have with it. This information is then processed for its affective content, such as, for example, whether the Huggable is being petted, tickled, or patted; the bear then responds appropriately".

The Huggable has been unveiled at the Siggraph2006 conference in Boston. From the conference website:

Enhanced Life
Over the past few years, the Robotic Life Group at the MIT Media Lab has been developing "sensitive skin" and novel actuator technologies in addition to our artificial-intelligence research. The Huggable combines these technologies in a portable robotic platform that is specifically designed to leave the lab and move to healthcare applications.

Goals
The ultimate goal of this project is to evaluate the Huggable's usefulness as a therapy for those who have limited or no access to companion-animal therapy. In collaboration with nurses, doctors, and staff, the technology will soon be applied in pilot studies at hospitals and nursing homes. By combining Huggable's data-collection capabilities with its sensing and behavior, it may be possible to determine early onset of a person's behavior change or detect the onset of depression. The Huggable may also improve day-to-day life for those who may spend many hours in a nursing home alone staring out a window, and, like companion-animal therapy, it could increase their interaction with other people in the facility.

Innovations
The core technical innovation is the "sensitive skin" technology, which consists of temperature, electric-field, and force sensors all over the surface of the robot. Unlike other robotic applications where the sense of touch is concerned with manipulation or obstacle avoidance, the sense of touch in the Huggable is used to determine the affective content of the tactile interaction. The Huggable's algorithms can distinguish petting, tickling, scratching, slapping, and poking, among other types of tactile interactions. By combining the sense of touch with other sensors, the Huggable detects where a person is in relation to itself and responds with relational touch behaviors such as nuzzling.
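The conference page does not say how this classification is implemented. One simple way to picture "determining the affective content of the tactile interaction" is as a classifier over features extracted from the skin sensors, such as average force, contact duration, and how quickly the contact pattern repeats. The hand-written heuristic below is only an illustration under those assumed features, not the Huggable's actual algorithm.

# Illustrative classifier for affective touch -- a hand-rolled heuristic,
# not the Huggable's actual algorithm.
def classify_touch(mean_force: float, duration_s: float, repeat_hz: float) -> str:
    """Map simple features from the sensate skin to a touch label.
    mean_force: average normal force in newtons
    duration_s: length of contact in seconds
    repeat_hz:  how often the contact pattern repeats per second
    """
    if mean_force > 5.0 and duration_s < 0.3:
        return "slap"
    if mean_force < 1.0 and repeat_hz > 3.0:
        return "tickle"
    if duration_s > 1.0 and repeat_hz < 2.0 and mean_force < 3.0:
        return "pet"
    if duration_s < 0.3 and mean_force < 3.0:
        return "poke"
    return "pat"

print(classify_touch(mean_force=0.8, duration_s=2.0, repeat_hz=4.0))  # -> "tickle"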

Most robotic companions use geared DC motors, which are noisy and easily damaged. The Huggable uses custom voice-coil actuators, which provide soft, quiet, and smooth motion. Most importantly, if the Huggable encounters a person when it tries to move, there is no risk of injury to the person.

Another core technical innovation is the Huggable's combination of 802.11g networking with a robotic companion. This allows the Huggable to be much more than a fun, interactive robot. It can send live video and data about the person's interactions to the nursing staff. In this mode, the Huggable functions as a team member working with the nursing home or hospital staff and the patient or resident to promote the Huggable owner's overall health.

Vision
As poorly staffed nursing homes and hospitals become larger and more overcrowded, new methods must be invented to improve the daily lives of patients or residents. The Huggable is one of these technological innovations. Its ability to gather information and share it with the nursing staff can detect problems and report emergencies. The information can also be stored for later analysis by, for example, researchers who are studying pet therapy.


Jul 29, 2006

The speed of sight

Via Medgadget

 

According to estimates by University of Pennsylvania researchers, the human retina transmits information roughly as fast as a standard Ethernet connection:

Using an intact retina from a guinea pig, the researchers recorded spikes of electrical impulses from ganglion cells using a miniature multi-electrode array. The investigators calculate that the human retina can transmit data at roughly 10 million bits per second. By comparison, an Ethernet can transmit information between computers at speeds of 10 to 100 million bits per second...

The guinea pig retina was placed in a dish and then presented with movies containing four types of biological motion, for example a salamander swimming in a tank to represent an object-motion stimulus. After recording electrical spikes on an array of electrodes, the researchers classified each cell into one of two broad classes: "brisk" or "sluggish," so named because of their speed.

The researchers found that the electrical spike patterns differed between cell types. For example, the larger, brisk cells fired many spikes per second and their response was highly reproducible. In contrast, the smaller, sluggish cells fired fewer spikes per second and their responses were less reproducible.

But, what's the relationship between these spikes and information being sent? "It's the combinations and patterns of spikes that are sending the information. The patterns have various meanings," says co-author Vijay Balasubramanian, PhD, Professor of Physics at Penn. "We quantify the patterns and work out how much information they convey, measured in bits per second."

Calculating the proportions of each cell type in the retina, the team estimated that about 100,000 guinea pig ganglion cells transmit about 875,000 bits of information per second. Because sluggish cells are more numerous, they account for most of the information. With about 1,000,000 ganglion cells, the human retina would transmit data at roughly the rate of an Ethernet connection, or 10 million bits per second.
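The final scaling step is straightforward arithmetic: about 875,000 bits per second spread over roughly 100,000 guinea pig ganglion cells is on the order of 9 bits per second per cell, and multiplying by the human retina's roughly 1,000,000 ganglion cells gives close to 10 million bits per second. A quick sanity check of the numbers:

# Back-of-the-envelope check of the retina-vs-Ethernet comparison.
guinea_pig_cells = 100_000
guinea_pig_rate  = 875_000           # bits per second, from the study
per_cell_rate = guinea_pig_rate / guinea_pig_cells    # ~8.75 bits/s per cell

human_cells = 1_000_000
human_rate = per_cell_rate * human_cells              # ~8.75 million bits/s
print(f"{per_cell_rate:.2f} bits/s per cell -> ~{human_rate / 1e6:.1f} Mbit/s for the human retina")
# Roughly 10 Mbit/s, i.e. the speed of classic 10BASE-T Ethernet.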

 

Read the press release

 

Jul 24, 2006

Robot Doppelgänger

Hiroshi Ishiguro, director of the Intelligent Robotics Lab at Osaka University in Japan, has created a robot clone of himself. The robot, dubbed “Geminoid HI-1”, looks and moves exactly like him, and sometimes takes his place in meetings and classes.

[Image: Geminoid HI-1]


Link to Wired article on Ishiguro's android double.

See also the Geminoid videos 

Jul 19, 2006

BACS project


BACS (Bayesian Approach to Cognitive Systems) is an Integrated Project under the 6th Framework Program of the European Commission which has been allocated EUR 7.5 million in funding.
 
The BACS project brings together researchers and commercial companies working on artificial perception systems potentially capable of dealing with complex tasks in everyday settings.
 
From the project's website:
 
Contemporary robots and other cognitive artifacts are not yet ready to autonomously operate in complex real world environments. One of the major reasons for this failure in creating cognitive situated systems is the difficulty in the handling of incomplete knowledge and uncertainty.
 
[Image: BACS]

 

By taking up inspiration from the brains of mammals, including humans, the BACS project will investigate and apply Bayesian models and approaches in order to develop artificial cognitive systems that can carry out complex tasks in real world environments. The Bayesian approach will be used to model different levels of brain function within a coherent framework, from neural functions up to complex behaviors. The Bayesian models will be validated and adapted as necessary according to neuro-physiological data from rats and humans and through psychophysical experiments on humans. The Bayesian approach will also be used to develop four artificial cognitive systems concerned with (i) autonomous navigation, (ii) multi-modal perception and reconstruction of the environment, (iii) semantic facial motion tracking, and (iv) human body motion recognition and behavior analysis. The conducted research shall result in a consistent Bayesian framework offering enhanced tools for probabilistic reasoning in complex real world situations. The performance will be demonstrated through its applications to driver assistant systems and 3D mapping, both very complex real world tasks.
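As a deliberately simplified illustration of the kind of probabilistic reasoning BACS targets, the snippet below runs a discrete Bayes filter for one-dimensional robot localisation: the belief over positions is updated from noisy sensor readings instead of being assumed certain. It is a generic textbook example, not code or a model from the project.

# Minimal discrete Bayes filter for 1-D localisation -- a textbook example
# of reasoning under uncertainty, not BACS project code.
def normalize(belief):
    total = sum(belief)
    return [b / total for b in belief]

def sense(belief, world, measurement, p_hit=0.8, p_miss=0.2):
    """Weight each position by how well it explains the sensor reading."""
    return normalize([
        b * (p_hit if cell == measurement else p_miss)
        for b, cell in zip(belief, world)
    ])

def move(belief, steps):
    """Shift the belief to model motion along the corridor (noise-free here)."""
    n = len(belief)
    return [belief[(i - steps) % n] for i in range(n)]

world = ["door", "door", "wall", "wall", "wall"]    # map of the corridor
belief = [1.0 / len(world)] * len(world)            # start fully uncertain

belief = sense(belief, world, "door")   # robot sees a door
belief = move(belief, 1)                # robot moves one cell to the right
belief = sense(belief, world, "wall")   # now it sees a wall
print(belief)                           # mass concentrates on cell 2, just past the doors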

 

Jul 17, 2006

Computers learn common sense

Via The Engineer, July 11, 2006

BBN Technologies has been awarded $5.5 million in funding from the Defense Advanced Research Projects Agency (DARPA) for the first phase of "Integrated Learner," which will learn plans or processes after being shown a single example.

The goal is to combine specialised domain knowledge with common sense knowledge to create a reasoning system that learns as well as a person and can be applied to a variety of complex tasks. Such a system will significantly expand the kinds of tasks that a computer can learn.

Read the full article 

Jun 18, 2006

Rheo Knee

Via KurzweilAI.net

 

MIT Media Lab researchers have developed a prosthetic "Rheo Knee" that uses AI to replicate the workings of a biological human joint, as well as "bio-hybrids," surgical implants that allow an amputee to control an artificial leg by thinking.

 

Jun 05, 2006

HUMANOIDS 2006


HUMANOIDS 2006 - Humanoid Companions

2006 IEEE-RAS International Conference on Humanoid Robots, December 4-6, 2006, University of Genova, Genova, Italy.

From the conference website:

The 2006 IEEE-RAS International Conference on Humanoid Robots will be held on December 4 to 6, 2006 in Genova, Italy. The conference series started in Boston in the year 2000, traveled through Tokyo (2001), Karlsruhe/Munich (2003), Santa Monica (2004), and Tsukuba (2005) and will dock in Genoa in 2006.

The conference theme, Humanoid Companions, addresses specifically aspects of human-humanoid mutual understanding and co-development.

Papers as well as suggestions for tutorials and workshops from academic and industrial communities and government agencies are solicited in all areas of humanoid robots. Topics of interest include, but are not limited to:


* Design and control of full-body humanoids
* Anthropomorphism in robotics (theories, materials, structure, behaviors)
* Interaction between life-science and robotics
* Human - humanoid interaction, collaboration and cohabitation
* Advanced components for humanoids (materials, actuators, portable energy storage, etc.)
* New materials for safe interaction and physical growth
* Tools, components and platforms for collaborative research
* Perceptual and motor learning
* Humanoid platforms for robot applications (civil, industrial, clinical)
* Cognition, learning and development in humanoid systems
* Software and hardware architectures for humanoid implementation

Important Dates
* June 1st, 2006 - Proposals for Tutorials/Workshops
* June 15th, 2006 - Submission of full-length papers
* Sept. 1st, 2006 - Notification of Paper Acceptance
* October 15th, 2006 - Submission of final camera-ready papers
* November 1st, 2006 - Deadline for advance registration

Paper Submission
Submitted papers MUST BE in Portable Document Format (PDF). NO OTHER FORMATS WILL BE ACCEPTED. Papers must be written in English. Six (6) camera-ready pages, including figures and references, are allowed for each paper. Up to two (2) additional pages are allowed for a charge of 80 euros for each additional page.
Papers over 8 pages will NOT be reviewed/accepted.

Detailed instructions for paper submission and formatting can be found here

Exhibitions
There will be an exhibition site at the conference, and promoters are encouraged to display state-of-the-art products and services in all areas of robotics and automation. Reservations for space and further information may be obtained from the Exhibits Chair and on the conference web site.

Video Submissions
Video submissions should present a documentary-like report on a piece of valuable work relevant to the humanoids community as a whole.
Video submissions should be in .avi or MPEG-4 format and should not exceed 5 MB.

INQUIRIES:
Please contact the General Co-Chairs and the Program Co-Chairs at humanoids06@listes.epfl.ch

ORGANIZATION:

General Co-Chairs:
Giulio Sandini, (U. Genoa, Italy)
Aude Billard, (EPFL, Switzerland)

Program Co-Chairs:
Jun-Ho Oh (KAIST, Korea)
Giorgio Metta (University of Genoa, Italy)
Stefan Schaal (University of Southern California)
Atsuo Takanishi (Waseda University)

Tutorials/Workshops Co-Chairs:
Rudiger Dillman (University of Karlsruhe)
Alois Knoll (TUM, Germany)

Exhibition Co-Chairs:
Cecilia Laschi (Scuola Superiore S. Anna - Pisa, Italy)
Matteo Brunnettini (U. Genoa, Italy)

Honorary Chairs:
George Bekey (USC, USA)
Hirochika Inoue (JSPS, Japan)
Friedrich Pfeiffer (TU Munich, Germany)

Local Arrangements Co-Chairs:
Giorgio Cannata, (U. Genoa, Italy)
Rezia Molfino, (U. Genoa and SIRI, Italy)


Jun 01, 2006

The future of computer vision

Via Smart Mobs

Will computers see as we do? MIT researchers are developing new methods to train computers to recognize people or objects in still images and video with 95 to 98 percent accuracy.

This research could soon be used in surveillance cameras.

Links: Primidi